Fine-tuning Factor Augmented Neural Lasso for Heterogeneous Environments

Chai, Jinhang, Fan, Jianqing, Gao, Cheng, Yin, Qishuo

arXiv.org Machine Learning

Fine-tuning is a widely used strategy for adapting pre-trained models to new tasks, yet its methodology and theoretical properties in high-dimensional nonparametric settings with variable selection have not yet been developed. This paper introduces the fine-tuning factor augmented neural Lasso (FAN-Lasso), a transfer learning framework for high-dimensional nonparametric regression with variable selection that simultaneously handles covariate and posterior shifts. We use a low-rank factor structure to manage high-dimensional dependent covariates and propose a novel residual fine-tuning decomposition in which the target function is expressed as a transformation of a frozen source function and other variables to achieve transfer learning and nonparametric variable selection. This augmented feature from the source predictor allows for the transfer of knowledge to the target domain and reduces model complexity there. We derive minimax-optimal excess risk bounds for the fine-tuning FAN-Lasso, characterizing the precise conditions, in terms of relative sample sizes and function complexities, under which fine-tuning yields statistical acceleration over single-task learning. The proposed framework also provides a theoretical perspective on parameter-efficient fine-tuning methods. Extensive numerical experiments across diverse covariate- and posterior-shift scenarios demonstrate that the fine-tuning FAN-Lasso consistently outperforms standard baselines and achieves near-oracle performance even under severe target sample size constraints, empirically validating the derived rates.
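The residual fine-tuning idea described above — expressing the target function as a transformation of a frozen source predictor augmented with the raw covariates — can be illustrated with a minimal sketch. This is not the paper's FAN-Lasso implementation: the source predictor, the target data-generating process, and the ordinary-least-squares target model here are all hypothetical stand-ins for the frozen source network and the neural-Lasso component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen source predictor: a fixed nonlinear map "learned" on
# source data and never updated during fine-tuning.
w_source = rng.normal(size=5)
def f_source(X):
    return np.tanh(X @ w_source)

# Target data under posterior shift: the target function is a transformation
# of the source function plus a dependence on one extra covariate
# (illustrative construction, not the paper's data model).
X = rng.normal(size=(200, 5))
y = 2.0 * f_source(X) + 0.5 * X[:, 0] + 0.1 * rng.normal(size=200)

# Fine-tuning step: augment the target covariates with the frozen source
# output and fit a lightweight target model on top (least squares here,
# standing in for the neural-Lasso component with variable selection).
Z = np.column_stack([f_source(X), X])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

mse = float(np.mean((y - Z @ coef) ** 2))
```

Because the augmented feature `f_source(X)` already carries the transferable structure, the target model only needs to learn a low-complexity correction, which is the source of the statistical acceleration the abstract refers to.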


Transfer Learning in Bayesian Optimization for Aircraft Design

Tfaily, Ali, Diouane, Youssef, Bartoli, Nathalie, Kokkolaras, Michael

arXiv.org Machine Learning

The use of transfer learning within Bayesian optimization addresses the disadvantages of the so-called "cold start" problem by using source data to aid in the optimization of a target problem. We present a method that leverages an ensemble of surrogate models using transfer learning and integrates it in a constrained Bayesian optimization framework. We identify challenges particular to aircraft design optimization related to heterogeneous design variables and constraints. We propose the use of a partial-least-squares dimension reduction algorithm to address design space heterogeneity, and a "meta" data surrogate selection method to address constraint heterogeneity. Numerical benchmark problems and an aircraft conceptual design optimization problem are used to demonstrate the proposed methods. Results show significant improvement in convergence in early optimization iterations compared to standard Bayesian optimization, with improved prediction accuracy for both objective and constraint surrogate models.
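The partial-least-squares dimension reduction mentioned above can be sketched as a single NIPALS-style step: find the direction in the design space whose projection is maximally covariant with the objective. The aircraft-design setup below (20 design variables, one dominant driving direction) is a hypothetical illustration, not the paper's benchmark.

```python
import numpy as np

def pls_first_component(X, y):
    """First PLS weight vector: w is proportional to X^T y after centering,
    i.e. the direction maximizing covariance between the projection and y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)

# Hypothetical high-dimensional design space (e.g. 20 design variables) in
# which the objective actually varies along one dominant direction d.
d = rng.normal(size=20)
d /= np.linalg.norm(d)
X = rng.normal(size=(100, 20))
y = X @ d + 0.05 * rng.normal(size=100)

w = pls_first_component(X, y)
# Projecting designs onto w yields a single latent variable for the
# surrogate; the recovered direction should align with the true one.
alignment = abs(float(w @ d))
```

Because the latent directions are driven by the objective (unlike PCA, which ignores it), heterogeneous source and target design spaces can each be projected into a shared low-dimensional space before surrogate modeling.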


Adversarial Reweighting for Partial Domain Adaptation

Neural Information Processing Systems

The conventional closed-set DA methods generally assume that the source and target domains share the same label space. However, this assumption is often not realistic in practice.


Confident-Anchor-Induced Multi-Source-Free Domain Adaptation

Neural Information Processing Systems

Unsupervised domain adaptation has attracted considerable academic attention by transferring knowledge from a labeled source domain to an unlabeled target domain.





d5c04aa72b92c53bda5b525b60958295-Supplemental-Conference.pdf

Neural Information Processing Systems

We study linear regression under covariate shift, where the marginal distribution over the input covariates differs between the source and the target domains, while the conditional distribution of the output given the input covariates is similar across the two domains.
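The covariate-shift setting described above is commonly handled by importance weighting: reweight each source example by the density ratio p_target(x)/p_source(x) so that the least-squares fit targets the shifted covariate distribution. The sketch below uses known Gaussian marginals so the ratio is available in closed form; this is an illustration of the standard technique, not the supplemental material's estimator (in practice the ratio must be estimated).

```python
import numpy as np

rng = np.random.default_rng(2)

# Same conditional y | x in both domains, different covariate marginals:
# source x ~ N(0, 1), target x ~ N(1, 1).
w_true = 1.5
x_src = rng.normal(loc=0.0, scale=1.0, size=500)
y_src = w_true * x_src + 0.1 * rng.normal(size=500)

def density_ratio(x, mu_target=1.0):
    """p_target(x) / p_source(x) for N(mu_target, 1) over N(0, 1)."""
    return np.exp(-(x - mu_target) ** 2 / 2.0) / np.exp(-x ** 2 / 2.0)

# Importance-weighted least squares on the source sample: the weighted
# normal equation sum(w x y) / sum(w x^2) targets the shifted distribution.
w = density_ratio(x_src)
w_hat = float(np.sum(w * x_src * y_src) / np.sum(w * x_src ** 2))
```

Since the linear model is correctly specified here, both the weighted and unweighted fits are consistent for `w_true`; the weighting matters when the model is misspecified, because it concentrates the approximation error on the region the target distribution actually covers.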